Little Rocket Experiment

I thought I'd show you some steps you can take while using the restyle function of AI Shots.

This is a little rocket I once made using SketchUp (don’t shoot me). It has terrible translucent materials assigned in KS, which I also did ages ago. But it will serve as the base of the experiment.

This is the rocket after I pushed KS into imagining it on a rocky planet. As you see, it gives a nice cartoony look, and the rocks, which are actually modelled, stay in place but become much livelier. The background, however, is created from the prompt.

This is the same image but upscaled 4x (to 4096x4096) using GigaPixel Pro AI. Often this will even result in higher quality than the original, thanks to its pretty clever processing.
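For comparison, here is what a plain, non-AI 4x resample does. This is a minimal Pillow sketch (the image here is just a flat placeholder, not the actual render): the arithmetic is simply 1024 × 4 = 4096 per side, and a resampling filter only interpolates existing pixels, which is exactly where an AI upscaler like GigaPixel differs by inventing new detail.

```python
from PIL import Image

# Placeholder for a 1024x1024 render; in practice this would be
# the image saved from KeyShot.
src = Image.new("RGB", (1024, 1024), (90, 90, 120))

# A plain 4x Lanczos resample: 1024 * 4 = 4096 on each side.
# Unlike an AI upscaler, this only interpolates existing pixels
# and cannot add new detail.
upscaled = src.resize((src.width * 4, src.height * 4), Image.LANCZOS)

print(upscaled.size)  # (4096, 4096)
```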

GigaPixel Pro AI has another trick you can use: it can actually add detail to a picture. You basically give a prompt again, and it will consider that prompt in deciding how, when, and where to add details.

As you see, the result is nothing like the original image; it could be an illustration from some sci-fi story.

Next, I took exactly the same rocket, but instead of using the existing landscape and rocky bottom, I let KS AI Shots create it. This gave me the following result:

Pretty cool, and I added a steampunk keyword, so the materials of the rocket got pushed a bit in that direction.

Above is again the 4x-upscaled version of the same area; you can see the details are actually sharper and more visible than in the original.

Time for some extra AI using GigaPixel, with the following result:

While there are obviously some issues with the depth of field, which was present in the input picture, it’s really cool what it has done to the rocket, the rocky sand, and the background.

As Maura wrote, if you put experiments here, they don’t have to be final artwork. When I saw Dries and Jan playing around with the AI Shots features, I got really enthusiastic about trying things myself.

For this experiment I used the upscaling of GigaPixel Pro AI, and I must say I was pretty amazed at how the final images looked compared to the original material with its terrible materials. You could do exactly the same as GigaPixel using a locally running AI.

None of the images went through Photoshop etc.


Awesome example; thanks for not just posting your final result but actually explaining how and why you used the different new functions we offer. Looks great!


Nice Oscar!

I definitely think that using AI enhancers is very interesting. Out of all the ways AI is being used by 3D artists, this is one I actually like quite a bit for product visualization. It’s hard to get truly photo-real levels of detail in a KeyShot scene, as there are so many tiny details required, down to the sub-pixel level.

Of course, there’s the issue of repeatability. But that aside, I do really like what you showed here and I do think that it’d be handy to have something like this in KS.


Thanks Will!

As you mention, AI based on Stable Diffusion won’t be precise enough for the actual products. Mainly because it’s a kind of probability calculation, and new products won’t have much context; the AI simply doesn’t know what the product is or should be.

I would love to have a bit of an in-between option between reimagine and background. Currently, background really does just a background, which makes the object feel disconnected in my early tests. The reimagine option works really well but also changes the object; it does keep the outlines, so there’s always Photoshop.

If users were able to use, for example, a prefix in part of the object names, you could arrange/style background items with simple low-polygon models, like a plant, some markers, or a keyboard on a desk, while the actual product that is also on the desk, for example headphones, would remain untouched. That way you get your perfect 3D product in a maybe-less-perfect AI environment.

I was trying to add props to a 3D model of a 60s radio: I can put a cube below it so the AI knows where a table would be, and a lamp next to it so it knows there should be a lamp. But reimagine changes the radio as well, and background won’t touch the table and lamp. A combination would be nice.
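That "combination" could in principle be done as a simple post-process: reimagine the whole scene, then paste the untouched product back using an object mask (for example, an object-ID/clown pass rendered from the same camera). A hypothetical Pillow sketch, with all three images as flat placeholders standing in for real renders of identical size and camera:

```python
from PIL import Image

size = (512, 512)
original_render = Image.new("RGB", size, (200, 30, 30))  # exact 3D product
reimagined = Image.new("RGB", size, (40, 60, 90))        # AI-restyled scene
mask = Image.new("L", size, 0)                           # black = keep AI pixels
mask.paste(255, (128, 128, 384, 384))                    # white = keep product

# Image.composite takes pixels from the first image where the mask is
# white and from the second image where it is black.
combined = Image.composite(original_render, reimagined, mask)

print(combined.getpixel((256, 256)))  # inside the mask: (200, 30, 30)
print(combined.getpixel((0, 0)))      # outside the mask: (40, 60, 90)
```

In a real workflow the mask would come from the renderer rather than a hand-drawn rectangle, but the compositing step itself is this simple.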

I haven’t really tested it, but I think you can work around this by using the AI-generated image simply as a background. Basically the opposite of what Dries did in the Office Hours meeting, where he used the AI image and projected it onto the object itself.
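The backplate workaround above can be sketched in a few lines: render the object with a transparent background (RGBA), then lay it over the AI image used purely as a static backplate. This is a hypothetical Pillow illustration with placeholder images; in practice both would come from the same fixed camera position.

```python
from PIL import Image

size = (512, 512)
backplate = Image.new("RGBA", size, (120, 140, 180, 255))  # AI-generated image

render = Image.new("RGBA", size, (0, 0, 0, 0))             # transparent background
render.paste((220, 210, 40, 255), (200, 200, 312, 312))    # opaque "object"

# alpha_composite lays the render over the backplate; the backplate
# shows through wherever the render is transparent.
final = Image.alpha_composite(backplate, render)

print(final.getpixel((256, 256)))  # object pixel: (220, 210, 40, 255)
print(final.getpixel((10, 10)))    # backplate pixel: (120, 140, 180, 255)
```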

It’s great fun to play with, and I never thought the little rocket would ever look like this. I once started learning 3D modelling because I have ideas for new products, architecture, etc. as a hobby but lack the needed drawing skills. So I think the next step would be trying the same with some designs of houses I once did :wink:

Hi Oscar, great points and ideas! You can totally use a restyle image as a backplate (as long as your camera position stays fixed, of course). I use that workflow all the time. It somehow grounds objects in the background better — and yes, we are looking into why that is and how we can improve the behavior in background mode.
